
    Interfaces for human-centered production and use of computer graphics assets

    The abstract is in the attachment.

    AR-MoCap: Using augmented reality to support motion capture acting

    Technology is disrupting the way films involving visual effects are produced. Chroma-key, LED walls, motion capture (mocap), 3D visual storyboards, and simulcams are only a few examples of the many changes introduced in the cinema industry in recent years. Although these technologies are becoming commonplace, they present new, unexplored challenges to actors. In particular, when mocap is used to record the actors' movements with the aim of animating digital character models, an increase in workload can easily be expected for the people on stage. In fact, actors have to rely largely on their imagination to understand what the digitally created characters will actually be seeing and feeling. This paper focuses on this specific domain, and aims to demonstrate how Augmented Reality (AR) can help actors when shooting mocap scenes. To this purpose, we devised a system named AR-MoCap that actors can use to rehearse a scene in AR on the real set before actually shooting it. Through an Optical See-Through Head-Mounted Display (OST-HMD), an actor can see, e.g., the digital characters of other actors wearing mocap suits overlapped in real time with their bodies. Experimental results showed that, compared to the traditional approach based on physical props and other cues, the devised system helps actors position themselves and direct their gaze while shooting the scene, while also improving spatial and social presence, as well as perceived effectiveness.

    Immersive Virtual Reality-Based Interfaces for Character Animation

    Virtual Reality (VR) has increasingly attracted the attention of the computer animation community in search of more intuitive and effective alternatives to the current sophisticated user interfaces. Previous works in the literature have already demonstrated the higher affordances offered by VR interaction, as well as the enhanced spatial understanding that arises thanks to the strong sense of immersion guaranteed by virtual environments. These factors have the potential to improve the animators' job, which is tremendously skill-intensive and time-consuming. The present paper explores the opportunities provided by VR-based interfaces for the generation of 3D animations via armature deformation. To the best of the authors' knowledge, this is the first tool that allows users to manage a complete pipeline supporting the above animation method, by letting them execute key tasks such as rigging, skinning and posing within a well-known animation suite using a customizable interface. Moreover, it is the first work to validate, in both objective and subjective terms, character animation performance in the above tasks and under realistic work conditions involving different user categories. In our experiments, task completion time was reduced by 26%, on average, while maintaining almost the same levels of accuracy and precision for both novice and experienced users.
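    The skinning step mentioned in the abstract is conventionally realized with linear blend skinning, where each vertex follows a weighted sum of bone transforms. A minimal sketch of that formula in plain Python (illustrative only; it is not the paper's implementation, and the bone setup is an assumption):

    ```python
    import math

    def rot_z(theta):
        """3x3 rotation matrix about the Z axis."""
        c, s = math.cos(theta), math.sin(theta)
        return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

    def apply(m, v):
        """Multiply a 3x3 matrix by a 3-vector."""
        return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

    def skin_vertex(v, bone_transforms, weights):
        """Linear blend skinning: v' = sum_i w_i * (R_i v + t_i)."""
        out = [0.0, 0.0, 0.0]
        for (rot, trans), w in zip(bone_transforms, weights):
            rv = apply(rot, v)
            for k in range(3):
                out[k] += w * (rv[k] + trans[k])
        return out

    # A vertex influenced half by a fixed root bone and half by a bone
    # rotated 90 degrees about Z: the result lies between the two poses.
    identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    posed = skin_vertex([1.0, 0.0, 0.0],
                        [(identity, [0, 0, 0]), (rot_z(math.pi / 2), [0, 0, 0])],
                        [0.5, 0.5])
    ```

    The per-vertex weights are exactly what the skinning task in the pipeline lets the user paint; posing then amounts to supplying new bone transforms each frame.
    
    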

    Immersive Movies: The Effect of Point of View on Narrative Engagement

    Cinematic Virtual Reality (CVR) offers filmmakers a wide range of possibilities to explore new techniques regarding movie scripting, shooting and editing. Despite the many experiments performed so far with both live action and computer-generated movies, only a few studies have focused on analyzing how these cinematic techniques actually affect the viewers' experience. Like in traditional cinema, a key step for CVR screenwriters and directors is to choose from which perspective the viewers will see the scene, the so-called point of view (POV). The aim of this paper is to understand to what extent watching an immersive movie from a specific POV could impact narrative engagement (NE), i.e., the viewers' sensation of being immersed in the movie environment and being connected with its characters and story. Two POVs that are typically used in CVR, i.e., first-person perspective (1-PP) and external perspective (EP), are investigated through a user study in which both objective and subjective metrics were collected. The user study was carried out by leveraging two live action 360° short films with distinct scripts. The results suggest that the 1-PP experience could be more pleasant than the EP one in terms of overall NE and narrative presence, or even for all the NE dimensions if the potential of that POV is specifically exploited.

    A participative system for tactics analysis in sport training based on immersive virtual reality

    The use of new technologies is becoming a common practice in many competitive sports, from soccer and football to basketball, golf, tennis, and swimming. In particular, virtual reality (VR) is increasingly being used to cope with a number of aspects that are essential in athletes' preparation. Within the above context, this paper presents a platform that allows coaches to interactively create and modify game tactics, which can then be visualized simultaneously by multiple players wearing VR headsets in an immersive 3D environment.

    User interaction feedback in a hand-controlled interface for robot team tele-operation using wearable augmented reality

    Continuous advancements in the field of robotics and its increasing spread across heterogeneous application scenarios make the development of ever more effective user interfaces for human-robot interaction (HRI) an extremely relevant research topic. In particular, Natural User Interfaces (NUIs), e.g., based on hand and body gestures, proved to be an interesting technology to be exploited for designing intuitive interaction paradigms in the field of HRI. However, the more sophisticated HRI interfaces become, the more important it is to provide users with accurate feedback about the state of the robot as well as of the interface itself. In this work, an Augmented Reality (AR)-based interface is deployed on a head-mounted display to enable tele-operation of a remote robot team using hand movements and gestures. A user study is performed to assess the advantages of wearable AR compared to desktop-based AR in the execution of specific tasks.

    A virtual character animation system based on reconfigurable tangible user interfaces and immersive virtual reality

    Computer animation and, particularly, virtual character animation, are very time-consuming and skill-intensive tasks, which require animators to work with sophisticated user interfaces. Tangible user interfaces (TUIs) already proved to be capable of making character animation more intuitive, and possibly more efficient, by leveraging the affordances provided by physical props that mimic the structure of virtual counterparts. The main downside of existing TUI-based animation solutions is the reduced accuracy, which is due partly to the use of mechanical parts, partly to the fact that, despite the adoption of a 3D input, users still have to work with a 2D output (usually represented by one or more views displayed on a screen). However, output methods that are natively 3D, e.g., based on virtual reality (VR), have already been exploited in different ways within computer animation scenarios. Moving from the above considerations and building upon an existing work, this paper proposes a VR-based character animation system that combines the advantages of TUIs with the improved spatial awareness, enhanced visualization and better control of the observation point in the virtual space ensured by immersive VR. Results of a user study with both skilled and unskilled users showed a marked preference for the devised system, which was judged as more intuitive than that in the reference work, and allowed users to pose a virtual character in less time and with higher accuracy.
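    Mapping the configuration of a jointed physical prop onto a virtual armature reduces, in its simplest form, to forward kinematics over the chain of joint angles. A toy 2D sketch (an illustrative assumption, not the system described above, which works in 3D with reconfigurable props):

    ```python
    import math

    def forward_kinematics(angles, lengths):
        """Planar forward kinematics: accumulate joint angles along the
        chain and return the world-space endpoint of each segment."""
        x = y = 0.0
        total = 0.0
        points = []
        for theta, length in zip(angles, lengths):
            total += theta          # child rotations compose with the parent's
            x += length * math.cos(total)
            y += length * math.sin(total)
            points.append((x, y))
        return points

    # A two-link limb with both joints bent 45 degrees: the first segment
    # points at 45 degrees, the second at 90 degrees relative to the root.
    pts = forward_kinematics([math.pi / 4, math.pi / 4], [1.0, 1.0])
    ```

    In a TUI-based system, the `angles` list would be fed by the sensors in the prop's joints, so posing the prop and posing the character become the same gesture.
    
    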

    A framework for animating customized avatars from monocular videos in virtual try-on applications

    Generating real-time animations for customized avatars is becoming of paramount importance, especially in Virtual Try-On applications. This technology allows customers to explore or “try on” products virtually. Despite the numerous benefits of this technology, some aspects still prevent its applicability in real scenarios. The first limitation regards the difficulties in generating expressive avatar animations. Moreover, potential customers often express concerns regarding the fidelity of the animations. To overcome these two limitations, the current paper presents a framework for animating customized avatars based on state-of-the-art techniques. The focus of the proposed work mainly lies on aspects regarding the animation of the customized avatars. More specifically, the framework encompasses two components. The first one automates the operations needed for generating the data structures used for the avatar animation. This component assumes that the mesh of the avatar is described through the Sparse Unified Part-Based Human Representation (SUPR). The second component of the framework is designed to animate the avatar through motion capture by making use of the MediaPipe Holistic pipeline. Experimental evaluations were carried out to assess the solutions proposed for pose beautification and joint estimation. Results demonstrated improvements in the quality of the reconstructed animation from both an objective and a subjective point of view.
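    Driving an avatar from landmark streams such as those produced by MediaPipe Holistic ultimately means turning triplets of 3D points into joint rotations. A minimal sketch of that geometric step (illustrative only; the landmark values and joint choice are assumptions, not the framework's actual code):

    ```python
    import math

    def joint_angle(a, b, c):
        """Angle at joint b (radians) between segments b->a and b->c,
        e.g. the elbow angle from shoulder, elbow and wrist landmarks."""
        u = [a[i] - b[i] for i in range(3)]
        v = [c[i] - b[i] for i in range(3)]
        dot = sum(u[i] * v[i] for i in range(3))
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        # Clamp to guard against floating-point drift outside [-1, 1].
        return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

    # A fully extended arm: shoulder, elbow and wrist are collinear,
    # so the elbow angle is pi (180 degrees).
    shoulder, elbow, wrist = (0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (0.6, 0.0, 0.0)
    angle = joint_angle(shoulder, elbow, wrist)
    ```

    In practice the three points would come from the pose landmarks returned by the Holistic pipeline each frame, and the resulting angles would be retargeted onto the SUPR skeleton's joints.
    
    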

    Designing interactive robotic games based on mixed reality technology

    This paper focuses on an emerging research area represented by robotic gaming and aims to explore the design space of interactive games that combine commercial off-the-shelf robots and mixed reality. To this purpose, a software platform is developed which allows players to interact with both physical elements and virtual content projected on the ground. A game is then created to show designers how to maximize the opportunities offered by such a technology and to build playful experiences.

    Virtual prototyping for the textile industry: A framework supporting the production of crocheted objects

    In recent years, much progress has been made in the field of virtual prototyping, driven by the interest of industry and artisans. Especially in the context of the textile industry, digitizing the prototyping stage makes it possible to validate product design choices before committing to the market. This paper presents a framework for the virtual prototyping of crocheted objects. The core of the framework is an algorithm capable of generating the crocheting patterns for a given object and the corresponding instructions. The instructions are leveraged by the framework to visualize the 3D geometry of the object, and can also be used to craft it. Compared to previous works, the proposed algorithm combines a number of features (primarily, the use of parametric surfaces and the support for short rows) that can reduce the distortions in the crafted object's shape while also lowering computational cost; the algorithm is also able to consider material- and style-related information. The results of a comparison between the proposed algorithm and state-of-the-art approaches showed improved performance in terms of similarity between the generated shape and the target one, computation time, and appearance of the crafted object.
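    To give a feel for pattern generation from a parametric surface, the stitch count of each crocheted round can be derived from the local circumference of the target shape. A toy sketch for a sphere (the gauge value and rounding scheme are assumptions for illustration; this is not the paper's algorithm, which also handles short rows and material properties):

    ```python
    import math

    def stitches_per_row(radius, rows, gauge):
        """Approximate stitch counts for crocheting a sphere in rounds:
        sample the circumference at each latitude and divide by the
        stitch width (gauge)."""
        counts = []
        for r in range(rows):
            # Latitude sampled at the middle of each round, pole to pole.
            phi = math.pi * (r + 0.5) / rows
            circumference = 2 * math.pi * radius * math.sin(phi)
            counts.append(max(1, round(circumference / gauge)))
        return counts

    # A 5 cm-radius sphere in 10 rounds with 1 cm-wide stitches:
    # counts grow toward the equator and shrink back symmetrically.
    counts = stitches_per_row(radius=5.0, rows=10, gauge=1.0)
    ```

    The differences between consecutive counts directly encode where increases and decreases go in the written instructions, which is why distortion in the crafted shape depends so strongly on how these counts are computed.
    
    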